Tech Ethics


NVIDIA's AI Ethics Chief: 'You Need Common Sense'

#artificialintelligence

Now senior director for AI and legal ethics at NVIDIA, Pope spends her days working with internal teams across the company to ensure its products engender trust across industries. In a recent "Solving for Tech Ethics" podcast, Pope joined Beena Ammanath, Deloitte LLP's Trustworthy and Ethical Technology leader, to discuss the challenges and opportunities associated with creating trustworthy AI. Ammanath: Five or 10 years ago, roles like yours just didn't exist. What does a day in your job look like? Pope: One day does not look like the next. Take yesterday as an example.


The Contestation of Tech Ethics: A Sociotechnical Approach to Ethics and Technology in Action

Green, Ben

arXiv.org Artificial Intelligence

Recent controversies related to topics such as fake news, privacy, and algorithmic bias have prompted increased public scrutiny of digital technologies and soul-searching among many of the people associated with their development. In response, the tech industry, academia, civil society, and governments have rapidly increased their attention to "ethics" in the design and use of digital technologies ("tech ethics"). Yet almost as quickly as ethics discourse has proliferated across the world of digital technologies, the limitations of these approaches have also become apparent: tech ethics is vague and toothless, is subsumed into corporate logics and incentives, and has a myopic focus on individual engineers and technology design rather than on the structures and cultures of technology production. As a result of these limitations, many have grown skeptical of tech ethics and its proponents, charging them with "ethics-washing": promoting ethics research and discourse to defuse criticism and government regulation without committing to ethical behavior. By looking at how ethics has been taken up in both science and business in superficial and depoliticizing ways, I recast tech ethics as a terrain of contestation where the central fault line is not whether it is desirable to be ethical, but what "ethics" entails and who gets to define it. This framing highlights the significant limits of current approaches to tech ethics and the importance of studying the formulation and real-world effects of tech ethics. In order to identify and develop more rigorous strategies for reforming digital technologies and the social relations that they mediate, I describe a sociotechnical approach to tech ethics, one that reflexively applies many of tech ethics' own lessons regarding digital technologies to tech ethics itself.


Why business cannot afford to ignore tech ethics

#artificialintelligence

From one angle, the pandemic looks like a vindication of "techno-solutionism". From everyday developments like teleconferencing to systems exploiting advanced artificial intelligence, platitudes about the power of innovation abound. Such optimism smacks of short-termism. Desperate times often call for swift and sweeping solutions, but implementing technologies without regard for their impact is risky and increasingly unacceptable to wider society. The business leaders of the future who purchase and deploy such systems face costly repercussions, both financial and reputational. Tech ethics, while a relatively new field, has suffered from the perception that it is either the domain of philosophers or of PR people.


Will the future of work be ethical? Perspectives from MIT Technology Review – TechCrunch

#artificialintelligence

In June, TechCrunch Ethicist in Residence Greg M. Epstein attended EmTech Next, a conference organized by the MIT Technology Review. The conference, which took place at MIT's famous Media Lab, examined how AI and robotics are changing the future of work. Greg's essay, Will the Future of Work Be Ethical? reflects on his experiences at the conference, which produced what he calls "a religious crisis, despite the fact that I am not just a confirmed atheist but a professional one as well." In it, Greg explores themes of inequality, inclusion and what it means to work in technology ethically, within a capitalist system and market economy. Accompanying the story for Extra Crunch are a series of in-depth interviews Greg conducted around the conference, with scholars, journalists, founders and attendees. Below he speaks to two key organizers: Gideon Lichfield, the editor in chief of the MIT Technology Review, and Karen Hao, its artificial intelligence reporter.


What Does an AI Ethicist Do?

#artificialintelligence

Microsoft was one of the earliest companies to begin discussing and advocating for an ethical perspective on artificial intelligence. The issue began to take off at the company in 2016, when CEO Satya Nadella spoke at a developer conference about how the company viewed some of the ethical issues around AI, and later that year published an article about these issues. Nadella's primary focus was on Microsoft's orientation toward using AI to augment human capabilities and building trust into intelligent products. The next year, Microsoft's R&D head Eric Horvitz partnered with Microsoft's president and chief legal officer Brad Smith to form Aether, a cross-functional committee addressing AI and ethics in engineering and research. With these foundations laid, in 2018, Microsoft established a full-time position in AI policy and ethics.


The winter of AI discontent – thoughts on trends in tech ethics

#artificialintelligence

Ethics is a hot-button topic of discussion for the technology industry right now, especially in the rapidly developing fields around artificial intelligence (AI), machine learning and automated decision systems. Consumers and tech industry workers alike are raising their voices to influence what kinds of automated decision-making systems are designed, what decisions they should be allowed to make – both in terms of industry verticals and specific applications within them – and what datasets should be used to build the models that power those systems. Google, for example, is pledging $25m towards "AI for social good". Yet some critics say they are "filled with dread" at the prospect of efforts spearheaded by a tech giant that may accelerate the dominance of one mode of thinking to the detriment of other groups. There is a growing emphasis on de-risking negative public perception of the tech industry through robust policies on data privacy and data security.


Interview with Fiona McEvoy, YouTheData

#artificialintelligence

The way people interact with technology is always evolving. Think about children today - give them a tablet or a smartphone and they have literally no problem figuring out how to work it. Whilst this is a natural evolution of our relationship with new tech, as it becomes more and more ingrained in our lives it's important to think about the ethical implications. This isn't the first time I've spoken about ethics and AI - I've had guests on the Women in AI Podcast such as Cansu Canca from the AI Ethics Lab and Yasmin J. Erden from St Mary's University, amongst others, join me to discuss this area, and I even wrote a white paper on the topic, which is on RE•WORK's digital content hub - so it's something that's really driving conversation at the moment. Fiona McEvoy, the founder of YouTheData.com, joined me on the podcast back in June to discuss the importance of collaboration in AI to ensure it's ethically sound.